Min-max discriminative training of decoding parameters using iterative linear programming

Authors

  • Brian Kan-Wing Mak
  • Tom Ko
Abstract

In automatic speech recognition, the decoding parameters — the grammar factor and the word insertion penalty — are usually hand-tuned to give the best recognition performance. This paper investigates an automatic procedure to determine their values using an iterative linear programming (LP) algorithm. LP naturally implements discriminative training by mapping linear discriminants into LP constraints. A min-max cost function is also defined to obtain a more stable and robust result. Empirical evaluations on the RM1 and WSJ0 speech recognition tasks show that the decoding parameters found by the proposed algorithm are as good as those found by a brute-force grid search; their optimal values also appear to be independent of the initial values used to start the iterative LP algorithm.
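
To make the formulation concrete, the sketch below poses such discriminative constraints as a linear program over the two decoding parameters and minimizes a min-max objective with an off-the-shelf solver. It is an illustrative reconstruction rather than the authors' implementation: the toy N-best scores, the non-negativity bound on the grammar factor, and the use of scipy.optimize.linprog are all assumptions.

```python
# Illustrative sketch (not the paper's code): one LP iteration that chooses the
# grammar factor and word insertion penalty minimizing the worst-case score
# violation over competing hypotheses. All numbers below are invented.
import numpy as np
from scipy.optimize import linprog

# (acoustic log-likelihood, LM log-probability, word count) for the reference
# transcription and one competitor of the same utterance; in practice these
# would come from the decoder's N-best lists.
pairs = [
    ((-1200.0, -30.0, 7), (-1195.0, -32.0, 7)),
    ((-980.0,  -26.0, 5), (-983.0,  -25.0, 5)),
    ((-1500.0, -48.0, 9), (-1496.0, -48.0, 11)),
    ((-1100.0, -33.0, 6), (-1102.0, -33.0, 5)),
]

# Hypothesis score: A(h) + alpha * L(h) + beta * N(h), with alpha the grammar
# factor and beta the word insertion penalty. Min-max LP over x = [alpha, beta, t]:
#   minimize t  subject to, for every (reference, competitor) pair,
#   [A(c) - A(r)] + alpha * [L(c) - L(r)] + beta * [N(c) - N(r)] <= t
# so t is the largest amount by which any competitor outscores its reference.
A_ub, b_ub = [], []
for (a_r, l_r, n_r), (a_c, l_c, n_c) in pairs:
    dA, dL, dN = a_c - a_r, l_c - l_r, n_c - n_r
    A_ub.append([dL, dN, -1.0])      # alpha*dL + beta*dN - t <= -dA
    b_ub.append(-dA)

res = linprog(c=[0.0, 0.0, 1.0],     # objective: minimize t only
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, None),   # assume a non-negative grammar factor
                      (None, None), (None, None)],
              method="highs")

alpha, beta, t = res.x
print(f"grammar factor {alpha:.2f}, insertion penalty {beta:.2f}, "
      f"worst-case violation {t:.2f}")
# In the iterative scheme, the decoder would be re-run with the new (alpha, beta)
# to regenerate the N-best lists, and the LP re-solved until the parameters converge.
```

Each inequality is exactly the linear discriminant "the reference should score at least as well as this competitor", which is what allows the discriminative criterion to be written directly as LP constraints.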

Similar resources

Automatic estimation of decoding parameters using large-margin iterative linear programming

The decoding parameters in automatic speech recognition — grammar factor and word insertion penalty — are usually determined by performing a grid search on a development set. Recently, we cast their estimation as a convex optimization problem, and proposed a solution using an iterative linear programming algorithm. However, the solution depends on how well the development data set matches with ...
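
For contrast, a minimal sketch of the grid-search baseline mentioned in this snippet is given below; the parameter ranges and the stand-in development-set WER function are invented for illustration.

```python
# Illustrative grid-search baseline (assumed ranges; the WER function is a
# stand-in for actually re-decoding the development set with each setting).
import itertools

def dev_set_wer(grammar_factor, insertion_penalty):
    # Placeholder for decoding the development set with these parameters and
    # measuring word error rate; here a smooth bowl with a minimum near (12, -4).
    return 8.0 + 0.05 * (grammar_factor - 12) ** 2 + 0.08 * (insertion_penalty + 4) ** 2

best = min(
    ((dev_set_wer(gf, wip), gf, wip)
     for gf, wip in itertools.product(range(5, 21), range(-10, 11, 2))),
    key=lambda triple: triple[0],
)
print("best (WER, grammar factor, insertion penalty):", best)
```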

Linear Programming-Based Decoding of Turbo-Like Codes and its Relation to Iterative Approaches

In recent work (Feldman and Karger [8]), we introduced a new approach to decoding turbo-like codes based on linear programming (LP). We gave a precise characterization of the noise patterns that cause decoding error under the binary symmetric and additive white Gaussian noise channels. We used this characterization to prove that the word error rate is bounded by an inverse polynomial in the cod...

Discriminative Learning of Max-Sum Classifiers

The max-sum classifier predicts an n-tuple of labels from an n-tuple of observable variables by maximizing a sum of quality functions defined over neighbouring pairs of labels and observable variables. Predicting labels as MAP assignments of a Markov random field is a particular example of the max-sum classifier. Learning the parameters of the max-sum classifier is a challenging problem because even comp...
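
As a small worked example of the prediction step described in this snippet (the quality tables are invented toy numbers, and only a chain of neighbouring positions is considered), the max-sum labelling can be computed by Viterbi-style dynamic programming:

```python
# Toy chain-structured max-sum prediction: score(y) = sum_i q(x_i, y_i)
# + sum_i g(y_i, y_{i+1}); the argmax is found by dynamic programming.
import numpy as np

unary = np.array([      # q(x_i, y): quality of label y at position i (toy values)
    [2.0, 0.5],
    [0.2, 1.5],
    [1.0, 1.1],
])
pairwise = np.array([   # g(y, y'): quality of neighbouring label pair (toy values)
    [0.8, -0.3],
    [-0.3, 0.8],
])

n, k = unary.shape
score = unary[0].copy()             # best score of any prefix ending in each label
back = np.zeros((n, k), dtype=int)  # backpointers for recovering the argmax
for i in range(1, n):
    cand = score[:, None] + pairwise + unary[i]  # cand[y_prev, y]
    back[i] = cand.argmax(axis=0)
    score = cand.max(axis=0)

labels = [int(score.argmax())]
for i in range(n - 1, 0, -1):
    labels.append(int(back[i][labels[-1]]))
labels.reverse()
print("predicted labels:", labels, "score:", float(score.max()))
```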

Max-Violation Perceptron and Forced Decoding for Scalable MT Training

While large-scale discriminative training has triumphed in many NLP problems, its definite success on machine translation has been largely elusive. Most recent efforts along this line are not scalable (training on the small dev set with features from top ∼100 most frequent words) and overly complicated. We instead present a very simple yet theoretically motivated approach by extending the recen...

Structured Discriminative Models for Sequential Data Classification

The use of discriminative models for structured classification tasks, such as automatic speech recognition, is becoming increasingly popular. The major contribution of this first-year work is that we propose a large-margin structured log-linear model for noise-robust continuous ASR. An important aspect of log-linear models is the form of the features. The features used in our structured log-linear m...

Journal:

Volume   Issue

Pages  -

Publication date: 2008